SYDE 556/750: Simulating Neurobiological Systems

Spatial Representation

  • A common feature of real neural systems is that there is spatial structure in how the neurons are organized
    • Topographic maps
  • Example: retinotopic maps (the layout of the neurons preserves the spatial organization of the retina)

  • Example: auditory cortex organized by frequency sensitivity

  • Example: superior colliculus for eye movement

  • How do we fit this into the NEF?

Spatial structure and encoders

  • One aspect is nice and easy
  • Instead of randomly choosing encoders, organize them in a structured way (see the sketch after this list)
    • the location of the neuron relates to its encoder
  • Nothing else changes
  • There's even an option in the interactive mode interface to automatically do this (even while a model is running)
  • Does this solution handle everything?
    • What's missing?
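
  • Here's a minimal sketch (not from the original notes) of position-dependent encoders: neuron $i$ sits at angle $\theta_i$ around a ring, and its encoder points in that direction

In [ ]:
import numpy

# A minimal sketch (not from the original notes): instead of sampling encoders
# randomly, assign each neuron an encoder based on its "position".  Here
# neuron i sits at angle theta_i around a ring and its encoder points that way.
N = 40
theta = numpy.linspace(0, 2*numpy.pi, N, endpoint=False)        # neuron positions
encoders = numpy.array([numpy.cos(theta), numpy.sin(theta)]).T  # one unit vector per neuron

# neighbouring neurons now have similar encoders, and therefore similar tuning
print(numpy.dot(encoders[0], encoders[1]))      # close to 1
print(numpy.dot(encoders[0], encoders[N//2]))   # close to -1
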

Consider vision

  • Consider the visual example
    • The simplest approach would be to say each neuron has some preferred point in space it responds to
    • 2-dimensional representation $x = <h, v>$
    • each neuron gets an encoder $e$ based on its position
    • $a = G[\alpha e \cdot x + J^{bias}]$
  • That's not quite going to work
    • a point $x$ farther from the origin, but in the same direction as $e$, will make this neuron fire faster
    • need to do a change of variables and add an extra dimension (see the sketch after this list)
    • $x = <h, v, \sqrt{1 - h^2 - v^2}>$
  • Now I can give a point as input and the neurons in the correct area of the visual cortex will fire
  • What's missing here?
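
  • Before moving on, here's a minimal sketch (not from the original notes) of why that change of variables helps

In [ ]:
import numpy

# A minimal sketch (not from the original notes) of the change of variables.
def lift(h, v):
    # map a 2-D point inside the unit circle onto the unit hemisphere, so ||x|| = 1
    return numpy.array([h, v, numpy.sqrt(1 - h**2 - v**2)])

e = lift(0.3, 0.4)    # a neuron whose preferred point is (0.3, 0.4)

# without the lift, a point farther out in the same direction gives a larger dot product
print(numpy.dot([0.3, 0.4], [0.3, 0.4]))     # 0.25
print(numpy.dot([0.3, 0.4], [0.45, 0.6]))    # 0.375 -- the farther point drives the neuron more

# with the lift, the dot product is largest at the preferred point itself
print(numpy.dot(e, lift(0.3, 0.4)))          # 1.0
print(numpy.dot(e, lift(0.45, 0.6)))         # about 0.95 -- now the preferred point wins
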

Complex representations

  • But when you see things, you get more than one input at a time
  • What happens if I see a dot at $<h_0, v_0>$ and another dot at $<h_1, v_1>$ at the same time?
  • Is that the same as seeing one dot at $<h_0 + h_1, v_0 + v_1>$?
    • No
  • We see them as two distinct points
  • Same thing in auditory cortex (multiple frequencies at once, plus other sound features)
  • Maybe even the same thing with the superior colliculus and eye movements (although that's less clear)

  • So what should this representation be?

    • Not just 2-dimensional
    • Need to represent a whole image

Grid-based representation

  • We know how to represent vectors, so why not just break the visual area up into pixels, and have one dimension per pixel?
    • So for a 3x3 image of a square, the $x$ value might be $<1,1,1,1,0,1,1,1,1>$
    • Can do brightness (and maybe even colour) this way too
  • Now choose encoders based on location
  • Simplest approach: each encoder looks like $<0,0,0,1,0,0,0,0,0>$ (sketched in the cell below)
    • This is equivalent to nine separate ensembles
    • What are the limitations of this approach?
    • How can we improve this?
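
  • Here's a minimal sketch (not from the original notes) of this grid-based setup

In [ ]:
import numpy

# A minimal sketch (not from the original notes): a 3x3 image flattened into a
# 9-dimensional vector, and a "perfectly aligned" encoder that responds to a
# single pixel.
square = numpy.array([[1, 1, 1],
                      [1, 0, 1],
                      [1, 1, 1]])
x = square.flatten()      # <1,1,1,1,0,1,1,1,1>

e = numpy.zeros(9)
e[3] = 1.0                # this neuron only cares about pixel 3

print(numpy.dot(e, x))    # the neuron's input is just the value of pixel 3
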

Changing how we choose encoders

  • Perfectly aligned encoders: $<0,0,0,1,0,0,0,0,0>$
  • Randomly chosen encoders: $<0.3, -0.1, 0.02, 0.1, 0.7, -0.2, -0.3, 0.001, -0.2>$
  • Can also go in between: $<0.25, 0, 0, 0.5, 0.25, 0, 0.25, 0, 0>$ (sketched in the cell below)
  • What is that?
    • A neuron that responds mostly to dots in one location, but also a little bit to dots nearby
  • Why would we do this?
    • What computational advantage do we get?
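
  • Here's a minimal sketch (not from the original notes) of one way to build such an encoder: a narrow Gaussian bump centred on one pixel of the 3x3 grid

In [ ]:
import numpy

# A minimal sketch (not from the original notes): a "local" encoder built by
# centring a narrow Gaussian bump on one pixel, so the neuron responds mostly
# to that pixel and a little to its neighbours.
cols, rows = numpy.meshgrid(numpy.arange(3), numpy.arange(3))
centre = (1, 0)           # the pixel this neuron prefers (row 1, column 0)
bump = numpy.exp(-((rows - centre[0])**2 + (cols - centre[1])**2) / (2 * 0.5**2))
e = (bump / numpy.linalg.norm(bump)).flatten()
print(numpy.round(e, 2))
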

Computation with sparse local encoders

  • With perfectly aligned encoders, we can only do linear functions of $x$
  • With randomly chosen encoders, we can do any function of $x$, but the accuracy decreases very quickly as we get up to higher-order terms (since there are many pairs of variables)
  • With encoders that are local (i.e. all the non-zero terms are clustered together) and sparse (i.e. fairly few non-zero terms), we can compute fairly complex functions, as long as they are local (i.e. they depend on things that are spatially nearby)
    • So we'll be worse at things that require us to relate two points far away in space, but better at things where the points are nearby
    • This seems like a reasonable tradeoff

Does the brain do this?

  • It looks to be the case
  • A very common way to characterize neurons in the visual system is to find the stimulus a neuron responds to most strongly

  • Usually modelled as a Gabor filter (a 2-D sinusoid times a Gaussian):

  • Pretty much any machine learning algorithm trained on visual data ends up with filters like this

  • So this could be a good way to generate encoders for visual stimuli
    • And maybe even other stimuli too

Implementation

  • Let's start by making Gabor filters to use as our encoders

In [6]:
import numpy

def gabor(width, height, lambd, theta, psi, sigma, gamma, x_offset, y_offset):
    x = numpy.linspace(-1, 1, width)
    y = numpy.linspace(-1, 1, height)
    X, Y = numpy.meshgrid(x, y)
    X = X - x_offset
    Y = Y + y_offset

    cosTheta = numpy.cos(theta)
    sinTheta = numpy.sin(theta)
    xTheta = X * cosTheta  + Y * sinTheta
    yTheta = -X * sinTheta + Y * cosTheta
    e = numpy.exp( -(xTheta**2 + yTheta**2 * gamma**2) / (2 * sigma**2) )
    cos = numpy.cos(2 * numpy.pi * xTheta / lambd + psi)
    return e * cos

g = gabor(500, 500, lambd=0.6, 
                    theta=numpy.pi/4, 
                    psi=numpy.pi/2, 
                    sigma=0.2, 
                    gamma=0.7,
                    x_offset=0,
                    y_offset=0)
import pylab
pylab.imshow(g, extent=(-1, 1, -1, 1), interpolation='none', vmin=-1, vmax=1, cmap='gray')
pylab.show()


  • Now make a bunch of them

In [7]:
width = 50
height = 50

N = 100

import random

def make_random_gabor(width, height):
    return gabor(width, height, 
                  lambd=random.uniform(0.3, 0.8),
                  theta=random.uniform(0, 2*numpy.pi),
                  psi=random.uniform(0, 2*numpy.pi),
                  sigma=random.uniform(0.2, 0.5),
                  gamma=random.uniform(0.4, 0.8),
                  x_offset=random.uniform(-1, 1),
                  y_offset=random.uniform(-1, 1))

encoders = [make_random_gabor(width, height) for i in range(N)]
import pylab
pylab.figure(figsize=(10,8))
for i in range(N):
    w = i%12
    h = i/12
    pylab.imshow(encoders[i], extent=(w, w+0.95, h, h+0.95), interpolation='none',
                 vmin=-1, vmax=1, cmap='gray')
    pylab.xticks([])
    pylab.yticks([])
pylab.xlim((0, 12))
pylab.ylim((0, N/12))
    
pylab.show()


  • now let's make some neurons and find some decoders
  • to find decoders, we need to choose $x$ values to sample over
    • what should we choose?

In [8]:
S = 100
width = 50
height = 50

x = numpy.random.normal(size=(width*height, S))

import pylab
pylab.figure(figsize=(10,8))
for i in range(S):
    w = i%12
    h = i/12
    img = x[:,i]
    img.shape = width, height
    pylab.imshow(img, extent=(w, w+0.95, h, h+0.95), interpolation='none',
                 vmin=-1, vmax=1, cmap='gray')
    pylab.xticks([])
    pylab.yticks([])
pylab.xlim((0, 12))
pylab.ylim((0, S/12))
    
pylab.show()


  • Why might that be a bad idea?
  • Other options?

In [9]:
S = 100

width = 50
height = 50

def normalize(v):
    return v/numpy.linalg.norm(v)


C = 4
x = [normalize(numpy.sum([make_random_gabor(width, height) for i in range(C)],axis=0).flatten()) for j in range(S)]
x = numpy.array(x).T

import pylab
pylab.figure(figsize=(10,8))
for i in range(S):
    w = i%12
    h = i/12
    img = x[:,i]
    img.shape = height, width
    pylab.imshow(img, extent=(w, w+0.95, h, h+0.95), interpolation='none',
                 vmin=-0.1, vmax=0.1, cmap='gray')
    pylab.xticks([])
    pylab.yticks([])
pylab.xlim((0, 12))
pylab.ylim((0, S/12))
    
pylab.show()



In [10]:
import syde556

S = 100
x = []
for i in range(S):
    sx, A = syde556.generate_signal(T=width, dt=1.0, rms=1, limit=0.05, seed=i)
    sy, A = syde556.generate_signal(T=height, dt=1.0, rms=1, limit=0.05, seed=i+2000)
    SX, SY = numpy.meshgrid(sx, sy)
    img = SX*SY
    x.append(normalize(img.flatten()))
x = numpy.array(x).T


import pylab
pylab.figure(figsize=(10,8))
for i in range(S):
    w = i%12
    h = i/12
    img = x[:,i]
    img.shape = height, width
    pylab.imshow(img, extent=(w, w+0.95, h, h+0.95), interpolation='none',
                 vmin=-0.1, vmax=0.1, cmap='gray')
    pylab.xticks([])
    pylab.yticks([])
pylab.xlim((0, 12))
pylab.ylim((0, S/12))
    
pylab.show()


  • Now let's build the neurons and see how good they are

In [11]:
import syde556
reload(syde556)

N = 100
width = 50
height = 50

encoders = [make_random_gabor(width, height).flatten() for i in range(N)]

vision = syde556.Ensemble(neurons=N, dimensions=width*height, encoders=encoders)

decoder = vision.compute_decoder(x, x, noise=0.1)

A, xhat = vision.simulate_rate(x, decoder)

In [12]:
import pylab
pylab.figure(figsize=(10,12))
for i in range(S):
    w = i%6
    h = i/6
    img = x[:,i]
    img.shape = height, width
    pylab.imshow(img, extent=(2*w, 2*w+0.9, h, h+0.9), interpolation='none',
                 vmin=-0.1, vmax=0.1, cmap='gray')
    img = xhat[:,i]
    img.shape = height, width
    pylab.imshow(img, extent=(2*w+0.95, 2*w+1.85, h, h+0.9), interpolation='none',
                 vmin=-0.1, vmax=0.1, cmap='gray')
    pylab.xticks([])
    pylab.yticks([])
pylab.xlim((0, 12))
pylab.ylim((0, S/6))
    
pylab.show()


Pixel size

  • What size of pixel should we use?
  • How does that affect things?

In [17]:
width = 50
height = 50

# make samples
S = 100
x = []
for i in range(S):
    sx, A = syde556.generate_signal(T=width, dt=1.0, rms=1, limit=2.5/width, seed=i)
    sy, A = syde556.generate_signal(T=height, dt=1.0, rms=1, limit=2.5/height, seed=i+2000)
    SX, SY = numpy.meshgrid(sx, sy)
    img = SX*SY
    x.append(normalize(img.flatten()))
x = numpy.array(x).T
print x.shape

#make neurons
N = 500

encoders = [make_random_gabor(width, height).flatten() for i in range(N)]

vision = syde556.Ensemble(neurons=N, dimensions=width*height, encoders=encoders)

decoder = vision.compute_decoder(x, x, noise=0.1)

A, xhat = vision.simulate_rate(x, decoder)


scale = 5.0/numpy.sqrt(width*height)
#plot results
import pylab
pylab.figure(figsize=(10,12))
for i in range(S):
    w = i%6
    h = i/6
    img = x[:,i]
    img.shape = height, width
    pylab.imshow(img, extent=(2*w, 2*w+0.9, h, h+0.9), interpolation='none',
                 vmin=-scale, vmax=scale, cmap='gray')
    img = xhat[:,i]
    img.shape = height, width
    pylab.imshow(img, extent=(2*w+0.95, 2*w+1.85, h, h+0.9), interpolation='none',
                 vmin=-scale, vmax=scale, cmap='gray')
    pylab.xticks([])
    pylab.yticks([])
pylab.xlim((0, 12))
pylab.ylim((0, S/6))
    
pylab.show()


(2500L, 100L)
  • Why is this the case?
  • What would happen if we went up to an infinite number of pixels?
  • How could we implement this?

Representing Continuous Spaces

  • It seems unnatural to pick some particular number of pixels
    • especially since actual vision doesn't really have anything like pixels
  • So what can we do instead?
  • Let $x$ be a continuous function, rather than a vector
  • What does that mean?
    • Instead of representing $x = <0, 0.5, 1, 0.5, 0, -0.5, -1, -0.5>$
    • we might have $x = \sin(v)$ where $v \in [0, 2\pi)$
  • What else would have to change?
    • $e$ would also be a function (over the same range of $v$ values)
    • instead of $x \cdot e$, we would do $\int x(v) e(v) dv$
    • normalize so that $\int e(v) e(v) dv=1$
  • Problems:
    • How do we do that integral?
    • Would have to do it analytically
    • Seems restrictive
    • If we instead do it numerically and approximate the integral with some finite $dv$, then we're doing exactly the same thing as if we had some large (but finite) number of pixels (sketched in the cell below)
  • How else can we represent a function?
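
  • Before answering that, here's a quick numerical sketch (not from the original notes) of the pixel-equivalence point above

In [ ]:
import numpy

# A minimal sketch (not from the original notes): approximating the integral of
# x(v)*e(v) with a finite dv is the same computation as a dot product over a
# large (but finite) number of samples ("pixels").
v = numpy.linspace(0, 2*numpy.pi, 1000)
dv = v[1] - v[0]

x = numpy.sin(v)                              # the represented function
e = numpy.sin(v)
e = e / numpy.sqrt(numpy.sum(e**2) * dv)      # normalize so that int e(v)^2 dv = 1

print(numpy.sum(x * e) * dv)                  # int x(v) e(v) dv, done numerically
print(numpy.dot(x, e) * dv)                   # exactly the same thing as a dot product
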

Representing a function

  • Here's another way to represent a function:
    • $f(v) = \sum_i c_i b_i(v)$
    • where the $b_i(v)$ are a set of basis functions
  • Example: the polynomials:
    • $b_i(v) = v^i$
    • so I might have $f(v)= v^3 + 3 v^2 + 3v +1$
    • so the coefficients $c$ are $<1, 3, 3, 1>$
    • This means we can represent $f(v)= v^3 + 3 v^2 + 3v +1$ using the vector $<1, 3, 3, 1>$
    • And we know how to represent vectors with the NEF
  • Is that all we need?
  • Still have to be able to compute $\int x(v) e(v) dv$

  • $\int x(v) e(v) dv$

  • $\int (\sum_i c_{x,i} b_i(v)) (\sum_i c_{e,i} b_i(v)) dv$
  • $\int (\sum_i c_{x,i} c_{e,i} b_i(v)^2)dv + (\mbox{ ... all the cross terms ... })$
  • so those cross terms are ugly
    • can we get rid of them?
  • Let's assume $b_i(v)$ are orthogonal

    • $\int b_i(v) b_j(v) dv$ is zero for $i \neq j$
    • all the cross terms disappear
  • $\int (\sum_i c_{x,i} c_{e,i} b_i(v)^2)dv$

  • now rearrange, pulling the sum out
  • $\sum_i c_{x,i} c_{e,i} \int b_i(v)^2 dv$
  • what if we make the basis functions have unit length?
    • $\int b_i(v)^2 dv = 1$
  • so $\int x(v) e(v) dv$ becomes $\sum_i c_{x,i} c_{e,i}$, which is just $c_x \cdot c_e$
    • so our dot product operation on the functions just becomes exactly the dot product of the vector of coefficients
    • so we won't have to change any of our code at all!
  • this only works assuming the basis functions $b_i(v)$ are:
    • orthogonal: $\int b_i(v) b_j(v) dv=0$ for $i \neq j$
    • normalized: $\int b_i(v)^2 dv = 1$
  • this is called an orthonormal basis (checked numerically in the cell below)
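
  • Here's a quick numerical check (not from the original notes) of that result, using the unit-length-rescaled Legendre polynomials from the next section as the orthonormal basis

In [ ]:
import numpy

# A minimal sketch (not from the original notes): with an orthonormal basis,
# the integral of x(v)*e(v) matches the dot product of the coefficient vectors
# c_x and c_e (up to discretization error).
v = numpy.linspace(-1, 1, 1000)
dv = 2.0 / len(v)

D = 5
# normalized Legendre polynomials as the orthonormal basis over [-1, 1]
B = numpy.array([numpy.polynomial.legendre.legval(v, numpy.eye(D)[i]) *
                 numpy.sqrt((2*i + 1) / 2.0) for i in range(D)])

c_x = numpy.random.randn(D)
c_e = numpy.random.randn(D)
x = numpy.dot(c_x, B)                 # x(v) = sum_i c_x,i b_i(v)
e = numpy.dot(c_e, B)

print(numpy.sum(x * e) * dv)          # the function dot product, done numerically
print(numpy.dot(c_x, c_e))            # the coefficient dot product -- should match
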

An orthonormal basis

  • Let's go back to the polynomial example
  • $b_i(v) = v^i$
  • Are these orthogonal?
    • $\int b_i(v) b_j(v) dv=0$ for $i \neq j$
    • $\int v^i v^j dv=0$ for $i \neq j$
    • $\int v^{i+j} dv=0$ for $i \neq j$
    • let's pick a range of $v \in [-1,1]$
    • ${1 \over {i+j+1}} [(1)^{i+j+1} - (-1)^{i+j+1}]$
    • that'll only be zero if $i+j+1$ is even (i.e. $i+j$ odd), so, for example, $v^0$ and $v^2$ are not orthogonal

In [134]:
import numpy
v = numpy.linspace(-1, 1, 1000)
b_i = v**1
b_j = v**2

import pylab
pylab.plot(v, b_i, label='$b_i$')
pylab.plot(v, b_j, label='$b_j$')
pylab.plot(v, b_i*b_j, label='$b_i b_j$')
pylab.legend(loc='best')
xlabel('$v$')
pylab.show()
dv = 2.0/len(v)

print 'sum:', numpy.sum(b_i*b_j*dv)


sum: 3.90312782095e-18
  • So what could we use instead?
  • There is an orthogonal basis set that's fairly similar to the polynomials
    • We've even seen them before
    • The Legendre polynomials

In [135]:
for i in range(5):
    plot(v, numpy.polynomial.legendre.legval(v, numpy.eye(5)[i]))
xlabel('$v$')

show()


  • let's make sure they're orthogonal

In [136]:
import numpy
v = numpy.linspace(-1, 1, 1000)
i = 1
j = 2
b_i = numpy.polynomial.legendre.legval(v, numpy.eye(i+1)[i])
b_j = numpy.polynomial.legendre.legval(v, numpy.eye(j+1)[j])

import pylab
pylab.plot(v, b_i, label='$b_i$', linewidth=3)
pylab.plot(v, b_j, label='$b_j$', linewidth=3)
pylab.plot(v, b_i*b_j, label='$b_i b_j$')
pylab.legend(loc='best')
xlabel('$v$')
pylab.show()
dv = 2.0/len(v)
print 'sum:', numpy.sum(b_i*b_j*dv)


sum: 9.28077059648e-17
  • Are they normalized?

In [137]:
v = numpy.linspace(-1, 1, 1000)
dv = 2.0/len(v)

length = []
for i in range(10):
    b_i = numpy.polynomial.legendre.legval(v, numpy.eye(i+1)[i])
    length.append(sum(b_i*b_i)*dv)

plot(length)  
xlabel('$i$')
show()


  • Nope.
  • The integral of $b_i(v)^2$ over $[-1, 1]$ is $2/(2i+1)$
  • So we rescale

In [138]:
v = numpy.linspace(-1, 1, 1000)
dv = 2.0/len(v)

length = []
for i in range(10):
    b_i = numpy.polynomial.legendre.legval(v, numpy.eye(i+1)[i])
    b_i = b_i * numpy.sqrt((2*i + 1)/2.0)
    length.append(sum(b_i*b_i)*dv)

plot(length)  
xlabel('$i$')
ylim(0, 2)
show()


  • What do these scaled basis functions look like?

In [294]:
for i in range(5):
    plot(v, numpy.polynomial.legendre.legval(v, numpy.eye(i+1)[i])* numpy.sqrt((2*i + 1)/2.0))
xlabel('$v$')    
show()


  • So, let's say I want a neural population that can represent polynomials
  • Here are a set of randomly generated $x$ values showing example functions to represent

In [295]:
D = 5
S = 20
x_poly = numpy.random.randn(D, S)/numpy.sqrt(D)

v = numpy.linspace(-1, 1, 1000)

for i in range(S):
    plot(v, numpy.polynomial.polynomial.polyval(v, x_poly[:,i]))
xlabel('$v$')    
show()


  • Now we convert those into our basis space

In [296]:
x = []
for i in range(S):
    x.append(numpy.polynomial.legendre.poly2leg(x_poly[:,i])/numpy.sqrt((2*i + 1)/2.0))
x = numpy.array(x).T

v = numpy.linspace(-1, 1, 1000)

for i in range(S):
    plot(v, numpy.polynomial.legendre.legval(v, x[:,i])*numpy.sqrt((2*i + 1)/2.0))
xlabel('$v$')
show()


  • Now let's make a neural population to represent these
  • It's just a perfectly normal 5-dimensional population
    • We still pick encoders randomly

In [297]:
N = 50

ensemble = syde556.Ensemble(neurons=N, dimensions=5)

decoder = ensemble.compute_decoder(x, x, noise=0.1)

A, xhat = ensemble.simulate_rate(x, decoder)

figure(figsize=(10,4))
subplot(1, 2, 1)
for i in range(S):
    plot(v, numpy.polynomial.legendre.legval(v, x[:,i])*numpy.sqrt((2*i + 1)/2.0))
ylim(-3, 3)
xlabel('$v$')
title('original')
subplot(1, 2, 2)
for i in range(S):
    plot(v, numpy.polynomial.legendre.legval(v, xhat[:,i])*numpy.sqrt((2*i + 1)/2.0))
ylim(-3, 3)
xlabel('$v$')
title('decoded')
show()


  • We can do representation
  • Can we do computation?
  • Same way as before
    • All we have to do is define what function we want to approximate
  • Let's start with computing the area under the curve

In [298]:
def area(x):
    fv = numpy.polynomial.legendre.legval(v, x)*numpy.sqrt((2*i + 1)/2.0)
    a = sum(fv)*dv
    return [a]

fx = numpy.array([area(xx) for xx in x.T]).T
    
decoder = ensemble.compute_decoder(x, fx, noise=0.1)

A, fxhat = ensemble.simulate_rate(x, decoder)

for i in range(10):
    print x[:,i], fx[:,i], fxhat[:,i]


[-0.45249468  0.06689621  0.65862321 -0.22781363  0.04283414] [-3.99012265] [-3.692147]
[-0.13508849 -0.45085403 -0.29704591 -0.14351425 -0.04635415] [-1.19610606] [-0.99389806]
[ 0.03477385  0.59657275 -0.18919625  0.00530379 -0.06779654] [ 0.30484099] [ 0.29829337]
[ 0.15631005 -0.07939436 -0.10217138 -0.09528619 -0.02525089] [ 1.37936599] [ 1.31180309]
[-0.45944895  0.02760009 -0.27678184  0.08308695 -0.0547291 ] [-4.06067516] [-3.88563593]
[ 0.25147024  0.0522454   0.02486886  0.11510253  0.03265593] [ 2.22143421] [ 1.61950481]
[ 0.03977521 -0.06903214  0.03796262  0.09403689  0.02458351] [ 0.35183856] [ 0.24418011]
[-0.34478934  0.26520947 -0.15590749  0.04113701 -0.00300367] [-3.04650196] [-2.60464404]
[-0.08437981  0.28442028  0.19314713  0.05949096 -0.01813703] [-0.74367552] [-0.55534047]
[-0.01543183 -0.11761898  0.06478489  0.07507399 -0.00886165] [-0.13579599] [-0.09870697]
  • What about finding the maximum?

In [300]:
def maximum(x):
    fv = numpy.polynomial.legendre.legval(v, x)*numpy.sqrt((2*i + 1)/2.0)
    index = numpy.argmax(fv)
    return [v[index]]

fx = numpy.array([maximum(xx) for xx in x.T]).T
    
decoder = ensemble.compute_decoder(x, fx, noise=0.01)

A, fxhat = ensemble.simulate_rate(x, decoder)

for i in range(20):
    print x[:,i], fx[:,i], fxhat[:,i]


[-0.45249468  0.06689621  0.65862321 -0.22781363  0.04283414] [-1.] [-1.03426409]
[-0.13508849 -0.45085403 -0.29704591 -0.14351425 -0.04635415] [-0.94394394] [-0.94790649]
[ 0.03477385  0.59657275 -0.18919625  0.00530379 -0.06779654] [ 0.78178178] [ 0.80341805]
[ 0.15631005 -0.07939436 -0.10217138 -0.09528619 -0.02525089] [ 0.21721722] [ 0.03715061]
[-0.45944895  0.02760009 -0.27678184  0.08308695 -0.0547291 ] [-0.17317317] [-0.14223553]
[ 0.25147024  0.0522454   0.02486886  0.11510253  0.03265593] [ 1.] [ 0.98869827]
[ 0.03977521 -0.06903214  0.03796262  0.09403689  0.02458351] [ 1.] [ 0.47082208]
[-0.34478934  0.26520947 -0.15590749  0.04113701 -0.00300367] [ 1.] [ 1.01930901]
[-0.08437981  0.28442028  0.19314713  0.05949096 -0.01813703] [ 1.] [ 0.98959533]
[-0.01543183 -0.11761898  0.06478489  0.07507399 -0.00886165] [-0.7997998] [-0.36437875]
[-0.147665    0.049168   -0.06538418 -0.00584206 -0.02632643] [ 0.47347347] [ 0.32563004]
[ 0.02441503 -0.1699608   0.05092713 -0.02245458 -0.00107661] [-1.] [-0.90374256]
[ 0.14030728  0.06652304  0.04019052  0.00865063  0.0332155 ] [ 1.] [ 0.9777674]
[-0.04854667  0.19268431  0.12686373 -0.00477267 -0.00366835] [ 1.] [ 0.95878839]
[ 0.14449928  0.04155529  0.23186591  0.01521891  0.03536285] [ 1.] [ 0.94144684]
[-0.04972338 -0.09333049  0.195611   -0.03597476  0.02794989] [-1.] [-0.93327958]
[ 0.0649419  -0.02901023  0.03064615 -0.03862747 -0.00013708] [-1.] [-0.87645674]
[ 0.11407232 -0.05903258  0.02590225 -0.0141109   0.01488021] [-1.] [-0.69223594]
[ 0.07999296  0.02683351  0.0958427   0.03251879  0.02420397] [ 1.] [ 0.71611585]
[ 0.13212116  0.19791227  0.16534859  0.02255834  0.03551369] [ 1.] [ 1.22114539]
  • We can also compute functions as output
    • $f(x) = -x$
    • But, since $x$ is a function, this is really:
    • $f(x(v))=-x(v)$
  • This one's really easy, since we can apply it directly to the coefficients

In [301]:
def negative(x):
    return -x

fx = numpy.array([negative(xx) for xx in x.T]).T
    
decoder = ensemble.compute_decoder(x, fx, noise=0.01)

A, fxhat = ensemble.simulate_rate(x, decoder)

figure(figsize=(10,4))
subplot(1, 2, 1)
for i in range(S):
    plot(v, numpy.polynomial.legendre.legval(v, x[:,i])*numpy.sqrt((2*i + 1)/2.0))
xlabel('$v$')
ylim(-3, 3)
title('original')
subplot(1, 2, 2)
for i in range(S):
    plot(v, numpy.polynomial.legendre.legval(v, fxhat[:,i])*numpy.sqrt((2*i + 1)/2.0))
xlabel('$v$')
ylim(-3, 3)
title('decoded')
show()


  • So it's all the same as normal NEF
  • What functions $x$ we're good at representing and what functions $f(x)$ we're good at computing depend on the basis functions
  • How do we find these for things other than the polynomials?

Making your own orthonormal basis functions

  • Let's go back to the vision example
  • We need a set of basis functions that will be good at encoding $x$ (images) and $e$ (the Gabor-shaped tuning curves for the neurons)
  • So, given a bunch of functions, we need to find an orthonormal basis space
  • How do we do that?
  • make a big set of the functions, evaluated at some finite $dv$
  • do a singular value decomposition (SVD)
  • let's start with a simple set of tuning curves: Gaussian bumps

In [187]:
N = 200

v = numpy.linspace(-1, 1, 1000)
def make_gaussian(centre, sigma):
    return numpy.exp(-(v-centre)**2/(2*sigma**2))

encoders = numpy.array([make_gaussian(centre=random.uniform(-1, 1), 
                          sigma=0.1) for i in range(N)])
plot(v, encoders[:20].T)
xlabel('$v$')
show()


  • Now do SVD

In [192]:
U, S, V = numpy.linalg.svd(encoders.T)

plot(v, U[:,:10])
xlabel('$v$')
show()


  • Let's make sure they're orthogonal

In [198]:
import numpy
v = numpy.linspace(-1, 1, 1000)
i = 0
j = 1
b_i = U[:,i]
b_j = U[:,j]

import pylab
pylab.plot(v, b_i, label='$b_i$', linewidth=3)
pylab.plot(v, b_j, label='$b_j$', linewidth=3)
pylab.plot(v, b_i*b_j, label='$b_i b_j$')
pylab.legend(loc='best')
xlabel('$v$')
pylab.show()
dv = 2.0/len(v)
print 'sum:', numpy.sum(b_i*b_j*dv)


sum: -2.50635984098e-20
  • Are they normalized?

In [209]:
v = numpy.linspace(-1, 1, 1000)
dv = 2.0/len(v)

length = []
for i in range(10):
    b_i = U[:,i]
    length.append(sum(b_i*b_i)*dv)

plot(length)  
xlabel('$i$')
ylim(0, 0.004)
show()


  • Not quite
  • Missing the $dv$ factor

In [224]:
v = numpy.linspace(-1, 1, 1000)
dv = 2.0/len(v)

import math

basis = U/(math.sqrt(dv))

length = []
for i in range(10):
    b_i = basis[:,i]
    length.append(sum(b_i*b_i)*dv)

plot(length)  
xlabel('$i$')
ylim(0, 2)
show()


  • Here's what they look like after being scaled

In [228]:
plot(v, basis[:, :10])
xlabel('$v$')
show()


  • How many dimensions should we use?
  • The SVD tells us that too
  • The singular values indicate how important each basis function is for accurately representing the desired functions (one way to turn this into a cutoff is sketched after the plot below)

In [223]:
plot(S)
show()
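

  • One possible way (a sketch, not from the original notes) to choose a cutoff automatically: keep enough singular values to account for most of the variance in the encoder functions

In [ ]:
# A minimal sketch (not from the original notes): keep enough singular values
# to account for (say) 99% of the variance in the encoder functions.
# S here is the vector of singular values computed by the SVD above; the 99%
# threshold is an arbitrary choice.
variance = S**2 / numpy.sum(S**2)
cumulative = numpy.cumsum(variance)
bases = numpy.searchsorted(cumulative, 0.99) + 1
print(bases)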


  • So we'll pick the first 20 or so to be our basis
  • In order to use these, we need to convert our encoders $e$ and our samples $x$ to be coefficients in this basis space

In [288]:
bases = 20
figure(figsize=(10,4))
subplot(1, 2, 1)
plot(v, encoders.T[:,:20])
ylim(0,1)
title('original')

subplot(1, 2, 2)
basis = U[:,:bases]/(math.sqrt(dv))
e = numpy.dot(encoders, basis)*dv
plot(v, numpy.dot(basis, e.T)[:,:20])
ylim(0,1)
title('bases=%d'%bases)
show()


  • That gives us 20-dimensional encoders that represent these Gaussian functions well
  • We can do the same thing to some random $x$ functions we want these neurons to represent

In [289]:
S = 100
import syde556
x = numpy.array([syde556.generate_signal(2.0, 2.0/1000, 
                  rms=0.5, limit=2, seed=i)[0] for i in range(S)])

plot(v, x[:5].T)
xlabel('$v$')
show()


  • Here's how well that basis space represents those functions

In [290]:
figure(figsize=(10,4))
subplot(1, 2, 1)
plot(v, x.T[:,:20])
title('original')

subplot(1, 2, 2)
basis = U[:,:bases]/(math.sqrt(dv))
x_vector = numpy.dot(x, basis)*dv
plot(v, numpy.dot(basis, x_vector.T)[:,:20])
title('bases=%d'%bases)
show()


  • Now we can do standard NEF
    • The coefficients of the encoder functions $e$ are our encoders
    • The coefficients of the sample functions $x$ are our samples

In [332]:
N = 200
bases = 20
S = 200 

encoders = numpy.array([make_gaussian(centre=random.uniform(-1, 1), 
                          sigma=0.1) for i in range(N)])

U, Sing, V = numpy.linalg.svd(encoders.T)

basis = U[:,:bases]/(math.sqrt(dv))
e = numpy.dot(encoders, basis)*dv

x_func = numpy.array([syde556.generate_signal(2.0, 2.0/1000, 
                  rms=0.5, limit=2, seed=i)[0] for i in range(S)])

x = numpy.dot(x_func, basis).T*dv

ensemble = syde556.Ensemble(neurons=N, dimensions=bases, encoders=e)

decoder = ensemble.compute_decoder(x, x, noise=0.1)

A, xhat = ensemble.simulate_rate(x, decoder)


figure(figsize=(10,4))
subplot(1, 2, 1)
plot(v, x_func.T[:,:20])
ylim(-4, 4)

title('original')

subplot(1, 2, 2)
basis = U[:,:bases]/(math.sqrt(dv))
xhat_func = numpy.dot(basis, xhat)
plot(v, xhat_func[:,:20])
title('bases=%d'%bases)
ylim(-4, 4)
show()


  • We can also compute functions on this representation

In [347]:
N = 200
bases = 20
S = 200 

encoders = numpy.array([make_gaussian(centre=random.uniform(-1, 1), 
                          sigma=0.1) for i in range(N)])

U, Sing, V = numpy.linalg.svd(encoders.T)

basis = U[:,:bases]/(math.sqrt(dv))
e = numpy.dot(encoders, basis)*dv

x_func = numpy.array([syde556.generate_signal(2.0, 2.0/1000, 
                  rms=0.5, limit=2, seed=i)[0] for i in range(S)])

x = numpy.dot(x_func, basis).T*dv

ensemble = syde556.Ensemble(neurons=N, dimensions=bases, encoders=e)

def my_function(x):
    value = numpy.dot(x, basis.T)
    value = -value
    #value = numpy.hstack([value[100:], value[:100]])
    return numpy.dot(value, basis)*dv
    
    
fx = numpy.array([my_function(xx) for xx in x.T]).T    


decoder = ensemble.compute_decoder(x, fx, noise=0.1)

A, fxhat = ensemble.simulate_rate(x, decoder)


figure(figsize=(10,4))
subplot(1, 2, 1)
plot(v, x_func.T[:,:20])
ylim(-4, 4)

title('original')

subplot(1, 2, 2)
basis = U[:,:bases]/(math.sqrt(dv))
fxhat_func = numpy.dot(basis, fxhat)
plot(v, fxhat_func[:,:20])
title('bases=%d'%bases)
ylim(-4, 4)
show()


